Agricultural irrigation is a significant contributor to freshwater consumption. However, the current irrigation systems used in the field are not efficient. They rely mainly on soil moisture sensors and the experience of growers but do not account for future soil moisture loss. Predicting soil moisture loss is challenging because it is influenced by numerous factors, including soil texture, weather conditions, and plant characteristics. This article proposes a solution to improve irrigation efficiency, called DRLIC (deep reinforcement learning for irrigation control). DRLIC is a sophisticated irrigation system that uses deep reinforcement learning (DRL) to optimize its performance. The system employs a neural network, known as the DRL control agent, which learns an optimal control policy that considers both the current soil moisture measurement and the future soil moisture loss. We introduce an irrigation reward function that enables our control agent to learn from previous experiences. However, there may be instances in which the output of our DRL control agent is unsafe, such as irrigating too much or too little. To avoid damaging the health of the plants, we implement a safety mechanism that employs a soil moisture predictor to estimate the outcome of each action. If the predicted outcome is deemed unsafe, we perform a relatively conservative action instead. To demonstrate the real-world application of our approach, we develop an irrigation system that comprises sprinklers, sensing and control nodes, and a wireless network. We evaluate the performance of DRLIC by deploying it in a testbed consisting of six almond trees. During a 15-day in-field experiment, we compare the water consumption of DRLIC with that of a widely used irrigation scheme. Our results indicate that DRLIC outperforms the traditional irrigation method by achieving water savings of up to 9.52%.
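The safety mechanism described in the abstract can be illustrated with a short sketch: a soil moisture predictor estimates the moisture that would result from the agent's proposed action, and a conservative fallback is applied whenever that estimate falls outside a safe band. The code below is a minimal illustration of that idea only; the function names, moisture bounds, and the simple predictor are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of safety-guarded action selection for a DRL irrigation agent.
# All names and bounds here are illustrative assumptions.

SAFE_MIN = 0.20   # assumed lower bound on volumetric soil moisture
SAFE_MAX = 0.35   # assumed upper bound

def predict_moisture(current_moisture: float, irrigation_amount: float,
                     expected_loss: float) -> float:
    """Toy predictor: next moisture = current + applied water - expected loss."""
    return current_moisture + irrigation_amount - expected_loss

def conservative_action(current_moisture: float, expected_loss: float) -> float:
    """Irrigate just enough to keep moisture at the lower safe bound."""
    return max(0.0, SAFE_MIN + expected_loss - current_moisture)

def safe_select(agent_action: float, current_moisture: float,
                expected_loss: float) -> float:
    """Override the DRL agent's proposed action if its predicted outcome is unsafe."""
    predicted = predict_moisture(current_moisture, agent_action, expected_loss)
    if SAFE_MIN <= predicted <= SAFE_MAX:
        return agent_action
    return conservative_action(current_moisture, expected_loss)

# Example: the agent proposes too little water while expected losses are high,
# so the conservative fallback is used instead.
print(safe_select(agent_action=0.01, current_moisture=0.22, expected_loss=0.05))
```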
In recent years, the focus has been on enhancing user comfort in commercial buildings while cutting energy costs. Efforts have mainly centered on improving HVAC systems, the central control system. However, it's evident that HVAC alone can't ensure occupant comfort. Lighting, blinds, and windows, often overlooked, also impact energy use and comfort. This paper introduces a holistic approach to managing the delicate balance between energy efficiency and occupant comfort in commercial buildings. We present OCTOPUS, a system employing a deep reinforcement learning (DRL) framework using data-driven techniques to optimize control sequences for all building subsystems, including HVAC, lighting, blinds, and windows. OCTOPUS's DRL architecture features a unique reward function facilitating the exploration of tradeoffs between energy usage and user comfort, effectively addressing the high-dimensional control problem resulting from interactions among these four building subsystems. To meet data training requirements, we emphasize the importance of calibrated simulations that closely replicate target-building operational conditions. We train OCTOPUS using 10-year weather data and a calibrated building model in the EnergyPlus simulator. Extensive simulations demonstrate that OCTOPUS achieves substantial energy savings, outperforming state-of-the-art rule-based and DRL-based methods by 14.26% and 8.1%, respectively, in a LEED Gold Certified building while maintaining desired human comfort levels.
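As a concrete illustration of the kind of reward a controller like OCTOPUS optimizes, the sketch below combines an energy term with several comfort-violation terms into a single weighted penalty spanning the four subsystems. The weights, comfort metrics, and thresholds are illustrative assumptions, not the reward formulation from the paper.

```python
# Hypothetical sketch of an energy-vs-comfort reward for a DRL building controller.
# Weights, metrics, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BuildingState:
    energy_kwh: float              # energy consumed during this control step
    temp_deviation_c: float        # |zone temperature - thermal comfort setpoint|
    illuminance_deviation: float   # normalized deviation from desired lighting level
    co2_ppm: float                 # indoor CO2 as a proxy for air quality

def reward(state: BuildingState,
           w_energy: float = 1.0,
           w_thermal: float = 2.0,
           w_visual: float = 1.0,
           w_air: float = 0.5) -> float:
    """Negative weighted sum of energy use and comfort violations."""
    air_violation = max(0.0, state.co2_ppm - 1000.0) / 1000.0  # assumed 1000 ppm limit
    return -(w_energy * state.energy_kwh
             + w_thermal * state.temp_deviation_c
             + w_visual * state.illuminance_deviation
             + w_air * air_violation)

# Example: a step with moderate energy use and a small thermal deviation.
print(reward(BuildingState(energy_kwh=3.2, temp_deviation_c=0.5,
                           illuminance_deviation=0.1, co2_ppm=850)))
```

Raising w_thermal relative to w_energy biases the learned policy toward tighter temperature control at the cost of higher energy use, which is the tradeoff the reward is meant to expose during training.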